Lecture 6: Provable Approximation via Linear Programming

Authors

  • Pravesh Kothari
  • Sanjeev Arora
Abstract

One of the running themes in this course is the notion of approximate solutions. Of course, this notion is tossed around a lot in applied work: whenever the exact solution seems hard to achieve, you do your best and call the resulting solution an approximation. In theoretical work, approximation has a more precise meaning, whereby you prove that the computed solution is close to the exact or optimum solution in some precise metric. We saw some earlier examples of approximation in sampling-based algorithms; for instance, our hashing-based estimator for set size produces an answer that is, with high probability, within a (1 + ε) factor of the true answer. Today we will see many other examples that rely upon linear programming (LP).

Recall that most NP-hard optimization problems involve finding 0/1 solutions. Using LP one can find fractional solutions, where the relevant variables are constrained to take real values in [0, 1]. Recall the example of the assignment problem from last time, which is also a 0/1 problem (a job is either assigned to a particular factory or it is not), but for which the LP relaxation magically produces a 0/1 solution (although we didn't prove this in class). Whenever the LP produces a solution in which all variables are 0/1, it must also be the optimum 0/1 solution, since it is the best fractional solution and the class of fractional solutions contains every 0/1 solution. Thus the assignment problem is solvable in polynomial time.

Needless to say, we don't expect this magic to repeat for NP-hard problems, so in general the LP relaxation yields a fractional solution. We then give a way to round the fractional solution to a 0/1 solution, accompanied by a mathematical proof that the new solution is provably approximate.
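To make the relax-and-round recipe concrete, here is a small illustrative sketch (not taken from the lecture itself) for weighted Vertex Cover, a textbook instance of this technique: solve the LP relaxation, then round every variable with value at least 1/2 up to 1, which gives the classic factor-2 guarantee. The helper name vertex_cover_lp_round and the use of scipy.optimize.linprog are assumptions made purely for illustration.

import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp_round(n, edges, weights):
    """Illustrative sketch: LP relaxation of Vertex Cover plus threshold rounding.

    LP:  minimize   sum_v w_v * x_v
         subject to x_u + x_v >= 1 for every edge (u, v), and 0 <= x_v <= 1.
    Rounding every x_v >= 1/2 up to 1 keeps all edge constraints satisfied and
    at most doubles the LP cost, so the rounded cover costs at most 2 * OPT.
    """
    # linprog expects A_ub @ x <= b_ub, so rewrite x_u + x_v >= 1 as -x_u - x_v <= -1.
    A_ub = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = -1.0
        A_ub[i, v] = -1.0
    b_ub = -np.ones(len(edges))

    res = linprog(c=weights, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    frac = res.x                                        # optimal fractional solution
    cover = {v for v in range(n) if frac[v] >= 0.5}     # rounded 0/1 solution
    return frac, cover

# Example: a 4-cycle with unit weights. The LP optimum can set every x_v = 1/2
# (cost 2); rounding returns all four vertices (cost 4), within the factor-2 bound.
frac, cover = vertex_cover_lp_round(4, [(0, 1), (1, 2), (2, 3), (3, 0)], np.ones(4))
print("fractional solution:", np.round(frac, 2))
print("rounded cover:", sorted(cover))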


Similar resources

Lec. 2: Approximation Algorithms for NP-hard Problems (Part II)

We will continue the survey of approximation algorithms in this lecture. First, we will discuss a (1+ε)-approximation algorithm for Knapsack running in time poly(n, 1/ε). We will then see applications of some heavy hammers such as linear programming (LP) and semi-definite programming (SDP) towards approximation algorithms. More specifically, we will see LP-based approximation for MAXSAT and MAXCUT. In t...


Advanced Approximation Algorithms (CMU 18-854B, Spring 2008), Lecture 27: Algorithms for Sparsest Cut, Apr 24, 2008

In lecture 19, we saw an LP relaxation based algorithm to solve the sparsest cut problem with an approximation guarantee of O(log n). In this lecture, we will show that the integrality gap of the LP relaxation is Ω(log n), and hence this is the best approximation factor one can get via the LP relaxation. We will also start developing an SDP relaxation based algorithm which provides an O(√(log n)) ...


CS 264: Beyond Worst-Case Analysis, Lecture #9: A Taste of Compressive Sensing

The last several lectures proved that polynomial-time exact recovery is possible for instances of several NP-hard problems that satisfy some type of stability condition. Lecture #7 showed that the single-link++ algorithm, which searches over a restricted set of feasible solutions and thus can return a suboptimal solution on worst-case instances, recovers the optimal clustering in stable k-med...


Approximation Algorithms and Hardness of Approximation, March 19, 2013, Lectures 9 and 10: Iterative Rounding II

In the last lecture we saw a framework for building approximation algorithms using iterative rounding: 1. formulate the problem as a linear program (LP); 2. characterise the extreme point structure; 3. give an iterative algorithm; 4. analyse it. We used this framework to solve two problems: Matchings in Bipartite Graphs and the Generalised Assignment Problem. A negative point about this approach is that it requi...


CSC 5160: Combinatorial Optimization and Approximation Algorithms

In this lecture, we will talk about the technique of using Linear Programming (LP) to solve combinatorial optimization problems. The lecture is divided into two parts. In the first part, we discuss the theoretical aspects of LP and illustrate by examples how combinatorial problems can be reformulated as LP problems. In the second part, we introduce two popular algorithms for solving LP problems: t...



Publication date: 2016